The quantification of fat deposits in the surroundings of the heart is an accurate procedure for assessing health risk factors associated with several diseases. However, this type of assessment is not widely used in clinical practice due to the manual workload required. This work proposes a novel technique for the automatic segmentation of cardiac fat pads. The technique is based on applying classification algorithms to the segmentation of cardiac CT images. Furthermore, we extensively evaluate the performance of several algorithms on this task and discuss which ones provide better predictive models. Experimental results show a mean accuracy of 98.4% for the classification of epicardial and mediastinal fat, with a mean true positive rate of 96.2%. On average, the Dice similarity index between the segmented patients and the ground truth is 96.8%. Thus, our technique has achieved the most accurate results for the automatic segmentation of cardiac fat to date.
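As a hedged illustration of the evaluation metric reported above (not the authors' code), the Dice similarity index between two binary segmentation masks can be computed as follows; the array names are hypothetical:

```python
import numpy as np

def dice_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Example: compare a predicted fat mask against the ground-truth mask.
pred_mask = np.array([[0, 1, 1], [0, 1, 0]])
true_mask = np.array([[0, 1, 1], [1, 1, 0]])
print(dice_index(pred_mask, true_mask))  # ~0.857
```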
Malware is a major threat to computer systems and poses many challenges to cybersecurity. Targeted threats, such as ransomware, cause millions of dollars in losses every year. The steady increase in malware infections has been motivating popular antivirus (AV) vendors to develop dedicated detection strategies, which include carefully crafted machine learning (ML) pipelines. However, malware developers constantly change their samples' features to evade detection. This constant evolution of malware samples results in changes in the data distribution (i.e., concept drift) that directly affect ML model detection rates, something not considered in most works in the literature. In this work, we evaluate the impact of concept drift on malware classifiers using two Android datasets: DREBIN (about 130K apps) and a subset of AndroZoo (about 350K apps). We use these datasets to train an Adaptive Random Forest (ARF) classifier as well as a Stochastic Gradient Descent (SGD) classifier. We also order all dataset samples by their VirusTotal submission timestamps and then extract features from their textual attributes using two algorithms (Word2Vec and TF-IDF). We then conduct experiments comparing the two feature extractors, the classifiers, and four drift detectors (DDM, EDDM, ADWIN, and KSWIN) to determine the best approach for real environments. Finally, we compare some possible approaches to mitigate concept drift and propose a novel data-stream pipeline that updates both the classifier and the feature extractor. To do so, we (i) conduct a longitudinal evaluation with malware samples collected over nine years (2009-2018), (ii) review concept drift detection algorithms to attest their pervasiveness, (iii) compare distinct ML approaches to mitigate the problem, and (iv) propose an ML data-stream pipeline that outperforms literature approaches.
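As a minimal, hedged sketch of such a data-stream setup (not the authors' pipeline), the following combines a hand-rolled DDM-style drift detector with scikit-learn's SGDClassifier trained incrementally on hashed textual features; the class, thresholds, and batch interface are illustrative assumptions:

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

class DDM:
    """Drift Detection Method (simplified): tracks the streaming error rate p
    and its std s, flagging drift when p + s exceeds p_min + 3 * s_min."""
    def __init__(self):
        self.n, self.p = 1, 1.0
        self.p_min, self.s_min = float("inf"), float("inf")

    def update(self, error: bool) -> bool:
        self.p += (float(error) - self.p) / self.n
        s = np.sqrt(self.p * (1 - self.p) / self.n)
        self.n += 1
        if self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s
        return self.p + s > self.p_min + 3 * self.s_min  # drift signal

vectorizer = HashingVectorizer(n_features=2**18)  # stateless, stream-friendly
clf = SGDClassifier(loss="log_loss")
ddm = DDM()

def process_batch(texts, labels, first_batch=False):
    """Prequential evaluation: test on the incoming batch, then train on it."""
    X = vectorizer.transform(texts)
    if not first_batch:
        for err in (clf.predict(X) != labels):
            if ddm.update(bool(err)):
                print("concept drift detected; consider retraining")
    clf.partial_fit(X, labels, classes=[0, 1])  # 0 = goodware, 1 = malware
```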
In general, biometrics-based control systems may not rely on individuals' expected behavior or cooperation to operate properly. Instead, such systems should be aware of malicious procedures such as unauthorized access attempts. Some works in the literature suggest addressing the problem through gait recognition approaches. These methods aim to identify humans through intrinsic perceptible features, regardless of clothing or accessories. Although the problem represents a relatively long-standing challenge, most of the techniques developed to handle it present several drawbacks related to feature extraction and low classification rates, among other issues. However, deep learning approaches have recently emerged as a robust set of tools able to deal with almost any image- and computer-vision-related problem, and they have provided the most significant results for gait recognition as well. Therefore, this work provides a surveyed compilation of recent works on biometric detection through gait recognition, with a focus on deep learning approaches, emphasizing their benefits and exposing their weaknesses. In addition, it presents categorized and characterized descriptions of the datasets, approaches, and architectures employed to tackle the associated constraints.
Some recent works have employed decision trees to build explainable partitions that aim to minimize the $k$-means cost function. However, these works largely ignore metrics related to the depths of the leaves in the resulting tree, which is perhaps surprising considering how the explainability of a decision tree depends on these depths. To fill this gap in the literature, we propose an efficient algorithm that takes these metrics into account. In experiments on 7 datasets, our algorithm yields better results than decision-tree clustering algorithms such as those of \cite{dasgupta2020explainable}, \cite{frost2020exkmc}, \cite{laber2021price}, and \cite{DBLP:conf/icml/MakarychevS21}, typically achieving lower or equivalent costs with considerably shallower trees. We also show, via a simple adaptation of existing techniques, that the problem of building explainable partitions induced by binary trees for the $k$-means cost function does not admit a $(1+\epsilon)$-approximation in polynomial time unless $P=NP$, which justifies the quest for approximation algorithms and/or heuristics.
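As a hedged illustration of the two metrics discussed above (not the paper's algorithm), the sketch below computes the $k$-means cost and the leaf-depth statistics of the partition induced by a fitted scikit-learn decision tree; the data and reference labels are illustrative assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def kmeans_cost(X: np.ndarray, leaf_ids: np.ndarray) -> float:
    """Sum of squared distances from each point to its cluster centroid,
    where the clusters are the leaves of the tree."""
    cost = 0.0
    for leaf in np.unique(leaf_ids):
        pts = X[leaf_ids == leaf]
        cost += ((pts - pts.mean(axis=0)) ** 2).sum()
    return cost

def node_depths(tree) -> np.ndarray:
    """Depth of every node in a fitted sklearn tree (children follow parents
    in the internal arrays, so one forward pass suffices)."""
    depths = np.zeros(tree.tree_.node_count, dtype=int)
    for node in range(tree.tree_.node_count):
        for child in (tree.tree_.children_left[node],
                      tree.tree_.children_right[node]):
            if child != -1:
                depths[child] = depths[node] + 1
    return depths

# Fit a tree to mimic a reference clustering, then evaluate both metrics.
X = np.random.rand(200, 2)
ref_labels = (X[:, 0] > 0.5).astype(int)  # stand-in for k-means labels
tree = DecisionTreeClassifier(max_leaf_nodes=2).fit(X, ref_labels)
leaves = tree.apply(X)
print("k-means cost:", kmeans_cost(X, leaves))
print("mean leaf depth:", node_depths(tree)[leaves].mean())
```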
In this work, we propose and evaluate a new reinforcement learning approach, Compact Experience Replay, which uses temporal-difference learning with predicted target values based on recurrence over sets of similar transitions, together with a new approach to experience replay based on two transition memories. Our goal is to reduce the amount of experience required to train an agent with respect to the long-run accumulated reward. Its relevance to reinforcement learning lies in the small number of observations it needs to achieve results similar to those obtained by related methods in the literature, which usually require millions of video frames to train on Atari 2600 games. We report detailed results for training trials of only 100,000 frames and about 25,000 iterations on eight challenging games of the Arcade Learning Environment (ALE). We also present results for a DQN agent on the same games under the same experimental protocol as a baseline. To verify that a good policy can be approximated from a smaller number of observations, we also compare our results with those obtained from millions of frames reported in ALE benchmarks.
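For context, here is a hedged sketch of the standard machinery the abstract builds on: a single FIFO replay memory and one-step TD targets (the paper's two-memory scheme and recurrence-based target prediction are not reproduced here); all names are illustrative:

```python
import random
from collections import deque
import numpy as np

class ReplayMemory:
    """A generic FIFO transition memory (standard replay, for illustration)."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

def td_targets(batch, q_next_fn, gamma=0.99):
    """One-step TD targets: r + gamma * max_a' Q(s', a') for non-terminal s'.
    q_next_fn is any callable returning the action values of a state."""
    targets = []
    for _, _, reward, next_state, done in batch:
        bootstrap = 0.0 if done else gamma * np.max(q_next_fn(next_state))
        targets.append(reward + bootstrap)
    return np.array(targets)
```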
Despite the growing availability of high-capacity computing platforms, implementation complexity remains a significant concern for the real-world deployment of neural networks. This concern is due not only to the enormous cost of state-of-the-art network architectures, but also to the recent push towards edge intelligence and the use of neural networks in embedded applications. In this context, network compression techniques have been attracting interest due to their ability to reduce deployment costs while keeping inference accuracy at satisfactory levels. This paper is dedicated to developing a novel compression scheme for neural networks. To this end, a new $\ell_0$-norm-based regularization approach is first developed, which is capable of inducing strong sparsity in the network during training. Then, by targeting the smaller weights of the trained network with a pruning technique, smaller yet highly effective networks can be obtained. The proposed compression scheme also involves the use of $\ell_2$-norm regularization to avoid overfitting, as well as fine-tuning to improve the performance of the pruned network. Experimental results are presented with the aim of showing the effectiveness of the proposed scheme and making comparisons with competing approaches.
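As a hedged sketch of the generic prune-then-fine-tune stage (the paper's specific $\ell_0$-norm regularizer is not reproduced here), PyTorch's built-in magnitude pruning combined with $\ell_2$ weight decay could look like the following; the model and hyperparameters are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Illustrative small network; any trained model would take its place.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Prune the smallest-magnitude weights of each linear layer (here 80%).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)

# Fine-tune the pruned network; weight_decay applies l2-norm regularization.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(inputs, labels):
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()  # pruned entries receive zero gradient through the mask
    optimizer.step()
    return loss.item()
```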
A Digital Twin (DT) is a simulation of a physical system that provides information to make decisions that add economic, social or commercial value. The behaviour of a physical system changes over time, so a DT must be continually updated with data from the physical system to reflect its changing behaviour. For resource-constrained systems, updating a DT is non-trivial because of challenges such as on-board learning and off-board data transfer. This paper presents a framework for updating data-driven DTs of resource-constrained systems geared towards system health monitoring. The proposed solution consists of: (1) an on-board system running a light-weight DT allowing the prioritisation and parsimonious transfer of data generated by the physical system; and (2) off-board robust updating of the DT and detection of anomalous behaviours. Two case studies are considered using a production gas turbine engine system to demonstrate the digital representation accuracy for real-world, time-varying physical systems.
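As a hedged illustration of the on-board prioritisation idea (a sketch under assumed interfaces, not the paper's implementation), a light-weight on-board model could score incoming sensor samples by prediction residual and transmit only the most informative ones within a fixed budget:

```python
import numpy as np

class OnboardTwin:
    """A light-weight on-board surrogate model (illustrative linear stand-in)."""
    def __init__(self, weights: np.ndarray):
        self.weights = weights

    def predict(self, x: np.ndarray) -> float:
        return float(self.weights @ x)

def prioritise(twin, samples, observations, budget=10):
    """Rank samples by absolute prediction residual and keep the top `budget`
    for parsimonious off-board transfer; the rest stay on the device."""
    residuals = np.array([abs(twin.predict(x) - y)
                          for x, y in zip(samples, observations)])
    keep = np.argsort(residuals)[::-1][:budget]
    return [(samples[i], observations[i]) for i in keep]
```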
Training agents via off-policy deep reinforcement learning (RL) requires a large memory, named replay memory, that stores past experiences used for learning. These experiences are sampled, uniformly or non-uniformly, to create the batches used for training. When calculating the loss function, off-policy algorithms assume that all samples are of the same importance. In this paper, we hypothesize that training can be enhanced by assigning a different importance to each experience, based on its temporal-difference (TD) error, directly in the training objective. We propose a novel method that introduces a weighting factor for each experience when calculating the loss function at the learning stage. In addition to improving convergence speed when used with uniform sampling, the method can be combined with prioritization methods for non-uniform sampling. Combining the proposed method with prioritization methods improves sampling efficiency while increasing the performance of TD-based off-policy RL algorithms. The effectiveness of the proposed method is demonstrated by experiments in six environments of the OpenAI Gym suite. The experimental results show that the proposed method achieves a 33%~76% reduction in convergence time in three environments, and an 11% increase in returns together with a 3%~10% increase in success rate in the other three environments.
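As a hedged sketch of the general idea (a TD-error-based weighting factor in the training loss, not necessarily the authors' exact formulation), a weighted DQN-style loss in PyTorch might look like this; the weighting rule and its hyperparameters are illustrative assumptions:

```python
import torch

def weighted_td_loss(q_values, targets, epsilon=1e-2, alpha=0.5):
    """Mean squared TD loss in which each sample is weighted by its own
    (detached) TD error, so larger errors contribute more to the update."""
    td_error = (targets - q_values).detach().abs()
    weights = (td_error + epsilon) ** alpha  # illustrative weighting rule
    weights = weights / weights.mean()       # normalize around 1
    return (weights * (targets - q_values) ** 2).mean()

# Usage with tensors produced by the usual DQN machinery:
q = torch.tensor([0.5, 1.2, -0.3], requires_grad=True)
y = torch.tensor([1.0, 1.0, 0.0])
loss = weighted_td_loss(q, y)
loss.backward()
```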
We describe a Physics-Informed Neural Network (PINN) that simulates the flow induced by the astronomical tide in a synthetic port channel, with dimensions based on the Santos - S\~ao Vicente - Bertioga Estuarine System. PINN models aim to combine the knowledge of physical systems and data-driven machine learning models. This is done by training a neural network to minimize the residuals of the governing equations in sample points. In this work, our flow is governed by the Navier-Stokes equations with some approximations. There are two main novelties in this paper. First, we design our model to assume that the flow is periodic in time, which is not feasible in conventional simulation methods. Second, we evaluate the benefit of resampling the function evaluation points during training, which has a near zero computational cost and has been verified to improve the final model, especially for small batch sizes. Finally, we discuss some limitations of the approximations used in the Navier-Stokes equations regarding the modeling of turbulence and how it interacts with PINNs.
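As a hedged sketch of the two ideas highlighted above (a time-periodic input embedding and per-iteration resampling of collocation points), not the authors' code; the network shape, the angular frequency, and the sampler interface are assumptions:

```python
import torch
import torch.nn as nn

class PeriodicPINN(nn.Module):
    """PINN whose time input enters only through (sin, cos) features, making
    the learned flow exactly periodic with angular frequency omega."""
    def __init__(self, omega: float, hidden=64):
        super().__init__()
        self.omega = omega
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.Tanh(),  # inputs: x, y, sin(wt), cos(wt)
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),             # outputs: u, v, p
        )

    def forward(self, x, y, t):
        wt = self.omega * t
        inp = torch.cat([x, y, torch.sin(wt), torch.cos(wt)], dim=-1)
        return self.net(inp)

def sample_collocation(n, bounds):
    """Draw fresh collocation points each iteration: nearly free to generate
    and, per the abstract, helpful especially for small batch sizes."""
    lo = torch.tensor([b[0] for b in bounds])
    hi = torch.tensor([b[1] for b in bounds])
    return lo + (hi - lo) * torch.rand(n, len(bounds))
```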
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.